
    Learning Unitary Operators with Help From u(n)

    A major challenge in the training of recurrent neural networks is the so-called vanishing or exploding gradient problem. The use of a norm-preserving transition operator can address this issue, but parametrization is challenging. In this work we focus on unitary operators and describe a parametrization using the Lie algebra $\mathfrak{u}(n)$ associated with the Lie group $U(n)$ of $n \times n$ unitary matrices. The exponential map provides a correspondence between these spaces, and allows us to define a unitary matrix using $n^2$ real coefficients relative to a basis of the Lie algebra. The parametrization is closed under additive updates of these coefficients, and thus provides a simple space in which to do gradient descent. We demonstrate the effectiveness of this parametrization on the problem of learning arbitrary unitary operators, comparing to several baselines and outperforming a recently-proposed lower-dimensional parametrization. We additionally use our parametrization to generalize a recently-proposed unitary recurrent neural network to arbitrary unitary matrices, using it to solve standard long-memory tasks. Comment: 9 pages, 3 figures, 5 figures inc. subfigures, to appear at AAAI-1

    A Generative Model of Words and Relationships from Multiple Sources

    Neural language models are a powerful tool to embed words into semantic vector spaces. However, learning such models generally relies on the availability of abundant and diverse training examples. In highly specialised domains this requirement may not be met due to difficulties in obtaining a large corpus, or the limited range of expression in average use. Such domains may encode prior knowledge about entities in a knowledge base or ontology. We propose a generative model which integrates evidence from diverse data sources, enabling the sharing of semantic information. We achieve this by generalising the concept of co-occurrence from distributional semantics to include other relationships between entities or words, which we model as affine transformations on the embedding space. We demonstrate the effectiveness of this approach by outperforming recent models on a link prediction task and demonstrating its ability to profit from partially or fully unobserved data training labels. We further demonstrate the usefulness of learning from different data sources with overlapping vocabularies. Comment: 8 pages, 5 figures; incorporated feedback from reviewers; to appear in Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence 201

    Striatal Dopamine Transmission Is Subtly Modified in Human A53Tα-Synuclein Overexpressing Mice

    Mutations in, or elevated dosage of, SNCA, the gene for α-synuclein (α-syn), cause familial Parkinson's disease (PD). Mouse lines overexpressing the mutant human A53Tα-syn may represent a model of early PD. They display progressive motor deficits, abnormal cellular accumulation of α-syn, and deficits in dopamine-dependent corticostriatal plasticity, which, in the absence of overt nigrostriatal degeneration, suggest there are age-related deficits in striatal dopamine (DA) signalling. In addition, A53Tα-syn overexpression in cultured rodent neurons has been reported to inhibit transmitter release. Therefore, here we have characterized for the first time DA release in the striatum of mice overexpressing human A53Tα-syn, and explored whether A53Tα-syn overexpression causes deficits in the release of DA. We used fast-scan cyclic voltammetry to detect DA release at carbon-fibre microelectrodes in acute striatal slices from two different lines of A53Tα-syn-overexpressing mice, at ages up to 24 months. In A53Tα-syn overexpressors, mean DA release evoked by a single stimulus pulse was not different from that in wild-types, in either dorsal striatum or nucleus accumbens. However, the frequency responsiveness of DA release was slightly modified in A53Tα-syn overexpressors, and in particular showed slight deficiency when the confounding effects of striatal ACh acting at presynaptic nicotinic receptors (nAChRs) were antagonized. The re-release of DA was unmodified after single-pulse stimuli, but after prolonged stimulation trains, A53Tα-syn overexpressors showed enhanced recovery of DA release at old age, in keeping with elevated striatal DA content. In summary, A53Tα-syn overexpression in mice causes subtle changes in the regulation of DA release in the striatum. While modest, these modifications may indicate or contribute to striatal dysfunction.

    Identification of active transcriptional regulatory elements from GRO-seq data

    Modifications to the global run-on and sequencing (GRO-seq) protocol that enrich for 5'-capped RNAs can be used to reveal active transcriptional regulatory elements (TREs) with high accuracy. Here, we introduce discriminative regulatory-element detection from GRO-seq (dREG), a sensitive machine learning method that uses support vector regression to identify active TREs from GRO-seq data without requiring cap-based enrichment (https://github.com/Danko-Lab/dREG/). This approach allows TREs to be assayed together with gene expression levels and other transcriptional features in a single experiment. Predicted TREs are more enriched for several marks of transcriptional activation, including expression quantitative trait loci, disease-associated polymorphisms, acetylated histone 3 lysine 27 (H3K27ac) and transcription factor binding, than those identified by alternative functional assays. Using dREG, we surveyed TREs in eight human cell types and provide new insights into global patterns of TRE function.